Chegg Embraced AI. ChatGPT Ate Its Lunch Anyway
Investors were surprised when the online education company Chegg revealed last month that ChatGPT was hurting subscriber growth--the company lost half of its market value overnight. But long before Chegg became an index case for the disruptive force of ChatGPT, its top brass had heard plenty of warnings about the threat and opportunity of generative AI. For years, on afternoon walks outside Chegg's Silicon Valley headquarters, former executives say they had discussed someday slashing costs by tapping AI programs to replace the army of instructors who answer student questions and draft flashcards. Matthew Ramirez, a product leader who left Chegg two years ago, says he even advised CEO Dan Rosensweig in 2020 that generative AI would be the bus that ran down Chegg if it didn't prepare itself. And just weeks after OpenAI launched ChatGPT last November, a source familiar with the exchange says, one Chegg executive had the bot write an email to Rosensweig urging him to develop a ChatGPT rival.
- Education > Educational Setting > Online (0.57)
- Banking & Finance > Trading (0.37)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.83)
If Pinocchio Doesn't Freak You Out, Microsoft's Sydney Shouldn't Either
In November 2018, an elementary school administrator named Akihiko Kondo married Miku Hatsune, a fictional pop singer. The couple's relationship had been aided by a hologram machine that allowed Kondo to interact with Hatsune. When Kondo proposed, Hatsune responded with a request: "Please treat me well." The couple had an unofficial wedding ceremony in Tokyo, and Kondo has since been joined by thousands of others who have also applied for unofficial marriage certificates with a fictional character. Though some raised concerns about the nature of Hatsune's consent, nobody thought she was conscious, let alone sentient.
She Built an App to Block Harassment on Twitter. Elon Musk Killed It
Tracy Chou launched the Twitter app Block Party in 2021 to help users escape targeted harassment campaigns that she--as an Asian American woman--knew from personal experience could ostracize vulnerable voices from the public conversation. But on Wednesday Block Party closed its doors, becoming the latest victim of soaring new bills imposed by a struggling Twitter under new owner Elon Musk. Under Twitter's former ownership, Chou struck a deal with the company for free access to data--a win-win arrangement that would allow Block Party to grow and provide Twitter with a valuable anti-harassment tool to which it didn't have to devote expensive engineering time. But Chou tells TIME that following the recent expiration of that contract, Twitter wanted Block Party to pay $42,000 per month for access to enough data to keep the app running. There was no way Block Party could afford the figure, she says.
US eating disorder helpline takes down AI chatbot over harmful advice
The National Eating Disorder Association (Neda) has taken down an artificial intelligence chatbot, "Tessa", after reports that the chatbot was providing harmful advice. Neda has been under criticism over the last few months after it fired four employees in March who worked for its helpline and had formed a union. The helpline allowed people to call, text or message volunteers who offered support and resources to those concerned about an eating disorder. Members of the union, Helpline Associates United, say they were fired days after their union election was certified. The union has filed unfair labor practice charges with the National Labor Relations Board.
AI Is Not an Arms Race
The window of what AI can't do seems to be contracting week by week. Machines can now write elegant prose and useful code, ace exams, conjure exquisite art, and predict how proteins will fold. Last summer I surveyed more than 550 AI researchers, and nearly half of them thought that, if built, high-level machine intelligence would lead to impacts that had at least a 10% chance of being "extremely bad (e.g. human extinction)." On May 30, hundreds of AI scientists, along with the CEOs of top AI labs like OpenAI, DeepMind and Anthropic, signed a statement urging caution on AI: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The simplest argument is that progress in AI could lead to the creation of superhumanly-smart artificial "people" with goals that conflict with humanity's interests--and the ability to pursue them autonomously.
The Leak That Has Big Tech and Regulators Panicked
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out.
- Law > Statutes (0.85)
- Government (0.84)
Nvidia: chipmaker's strategic AI moves result in a tech position of power
Nvidia saw its valuation soar to $1tn on Tuesday, making it the fifth most valuable American company and one of the first major corporate beneficiaries of the hype around AI. The chipmaker has been a major and in some cases dominant player in several industries for years. But no development has raised its profile – and its potential windfall – as much as the current excitement around generative AI. Nvidia has been around for 30 years. The company got its start in 1993 building graphics processing units (GPUs) for video games.
- North America > United States > New York > New York County > New York City (0.06)
- Europe > Switzerland (0.05)
- Asia > Taiwan (0.05)
- Information Technology > Hardware (1.00)
- Leisure & Entertainment > Games > Computer Games (0.35)
- Transportation > Ground > Road (0.30)
Risk of extinction by AI should be 'global priority', say tech experts
A group of leading technology experts from across the globe have warned that artificial intelligence technology should be considered a societal risk and prioritised in the same class as pandemics and nuclear wars. The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and risks the technology poses to humanity. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. Signatories included the chief executives from Google's DeepMind, the ChatGPT developer OpenAI and AI startup Anthropic. The statement comes as global leaders and industry experts – such as the leaders of OpenAI – have made calls for regulation amid existential fears that the technology could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.83)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
The biggest problem in AI? Lying chatbots
Companies are also spending time and money improving their models by testing them with real people. A technique called reinforcement learning with human feedback, in which human testers manually refine a bot's answers and feed them back into the system, is widely credited with making ChatGPT so much better than the chatbots that came before it. A popular approach is to connect chatbots to databases of factual or more trustworthy information, such as Wikipedia, Google search, or bespoke collections of academic articles or business documents.
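The database-grounding idea described above can be sketched in a few lines of Python. This is a toy illustration, not any vendor's actual system: the corpus, the keyword-overlap scorer, and the prompt template are all assumptions for the sake of the example (production systems typically use embedding-based search over a vector index rather than word overlap).

```python
def score(query: str, doc: str) -> int:
    """Toy relevance: count words shared between the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages so the model answers from the sources
    rather than from whatever it memorized during training."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Hypothetical mini-corpus standing in for Wikipedia or a business archive.
corpus = [
    "Nvidia builds graphics processing units for video games and AI.",
    "Chegg is an online education company offering homework help.",
    "Block Party was a Twitter app for filtering harassment.",
]

prompt = build_prompt("What does Nvidia build?", corpus)
print(prompt)
```

The point of the design is that the chatbot's claims can be checked against the retrieved passages, which is what makes this approach attractive as a mitigation for fabricated answers.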
When does Windows Copilot launch? Here's everything you need to know.
As you've undoubtedly noticed, AI-related news is everywhere, and its influence continues to grow. Just last week, OpenAI released an iOS version of ChatGPT (an Android version is coming soon) that runs directly on your iPhone and adds the ability to speak your request for information into its interactive chatbot user interface. Now, Microsoft has announced that it's bringing a range of new generative AI-powered features to Windows 11 starting in June. The main component is called Windows Copilot, a set of text-driven assistive capabilities that make using your PC easier and more intuitive. The company also announced the ability to integrate Bing Chat plug-ins into Windows, meaning that many of the impressive capabilities Microsoft brought to its Bing search engine will be available directly in Windows.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.63)